Self-Supervised Multi-Object Tracking with Cross-Input Consistency (Supplementary Material)

Favyen Bastani, Songtao He, Sam Madden

Neural Information Processing Systems

For each training sequence ⟨I_0, ..., I_n⟩, Only-Occlusion randomly selects four indexes 0 < k1 ≤ k2 < k3 ≤ k4 < n to construct two disjoint frame subsequences ⟨I_k1, ..., I_k2⟩ and ⟨I_k3, ..., I_k4⟩. Learning to merely compare detection features across consecutive frames would yield low accuracy, since features in occluded frames are not observed. This strategy yields high consistency because it is unaffected by occluded intermediate frames. We then randomly select two more indexes 0 < k5, k6 < n such that k3 ≤ k5 ≤ k4 and k1 ≤ k6 ≤ k2, i.e., the hand-off for one tracker occurs while the other tracker observes a simulated occlusion.
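The index-selection step above can be sketched in a few lines. This is a minimal illustration of the ordering constraints only; the function name and the particular sampling distribution are assumptions, as the text specifies only the constraints the indexes must satisfy:

```python
import random

def sample_occlusion_indexes(n, rng=random):
    """Sample Only-Occlusion indexes for a sequence <I_0, ..., I_n>.

    Returns (k1, k2, k3, k4, k5, k6) satisfying
    0 < k1 <= k2 < k3 <= k4 < n, k3 <= k5 <= k4, and k1 <= k6 <= k2,
    so each tracker's hand-off frame falls inside the other tracker's
    simulated occlusion.
    """
    # Sorting four distinct indexes drawn from 1..n-1 enforces the
    # ordering constraints (illustrative scheme; the exact distribution
    # is not specified beyond these constraints).
    k1, k2, k3, k4 = sorted(rng.sample(range(1, n), 4))
    k5 = rng.randint(k3, k4)  # hand-off during the other tracker's occlusion
    k6 = rng.randint(k1, k2)
    return k1, k2, k3, k4, k5, k6
```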



Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

Shan, Shawn, Wenger, Emily, Zhang, Jiayun, Li, Huiying, Zheng, Haitao, Zhao, Ben Y.

arXiv.org Machine Learning

Today's proliferation of powerful facial recognition models poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data, and train highly accurate facial recognition models of us without our knowledge. We need tools to protect ourselves from unauthorized facial recognition systems and their numerous potential misuses. Unfortunately, work in related areas are limited in practicality and effectiveness. In this paper, we propose Fawkes, a system that allow individuals to inoculate themselves against unauthorized facial recognition models. Fawkes achieves this by helping users adding imperceptible pixel-level changes (we call them "cloaks") to their own photos before publishing them online. When collected by a third-party "tracker" and used to train facial recognition models, these "cloaked" images produce functional models that consistently misidentify the user. We experimentally prove that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain a 80+% protection success rate. In fact, we perform real experiments against today's state-of-the-art facial recognition services and achieve 100% success. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt cloaks.